The arguments for it are pretty bad upon closer scrutiny, and are almost certainly rationalizations rather than rationality. Sorry.
Unsubstantiated assertion.
It is incredibly unlikely to find yourself in a world where the significant insights about a real doomsday come from a single visionary who has done so little that can be unambiguously graded,
Interesting mixture of misunderstanding how probability works, and an ad-hominem.
Unless the work is in fact focused on some secret FAI effort, it seems likely that some automated software-development tool would foom first, reaching close to the absolute maximum optimality on given hardware. But it will remain a tool. The availability of such ultra-optimal tools in all aspects of software and hardware design would greatly decrease the advantage that a self-willed UFAI might have.
This is a genuinely interesting thought—is such a tool achievable, theoretically, and what would it do to the early development of a FOOM brain? A post about this would be worth a read.
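For what it's worth, the "ultra-optimal tool" idea does have a small-scale precedent in superoptimization: exhaustively searching the space of short instruction sequences for the cheapest one that computes a given function. A toy sketch of the idea (the ops and function names here are illustrative, not any real superoptimizer's API):

```python
from itertools import product

# Toy superoptimizer: brute-force the shortest sequence of primitive
# ops whose composition matches a target function on the test inputs.
# Illustrative only -- real superoptimizers search vastly larger
# instruction spaces, with pruning and solver-backed equivalence checks.

OPS = {
    "inc": lambda x: x + 1,   # x -> x + 1
    "dbl": lambda x: x * 2,   # x -> 2x
    "dec": lambda x: x - 1,   # x -> x - 1
}

def superoptimize(target, tests, max_len=6):
    """Return the shortest op sequence agreeing with `target` on `tests`."""
    for length in range(1, max_len + 1):
        for seq in product(OPS, repeat=length):
            def run(x, seq=seq):
                for name in seq:
                    x = OPS[name](x)
                return x
            if all(run(x) == target(x) for x in tests):
                return seq
    return None

# Shortest program for f(x) = 2x + 1 over the given ops:
print(superoptimize(lambda x: 2 * x + 1, tests=range(-5, 6)))
```

The search is exhaustive and blind, which is exactly the point: it finds optimal programs without having, or needing, any goals of its own.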
If I point out that the AI has good reasons not to kill us all, because it cannot determine whether it is in the top-level world, a simulation, or an engineering test sim, it is immediately conjectured that we will still ‘lose’ something because the AI will take up some resources in space. That is rationalization. Privileging a path of thought.
Or because your objections don’t actually solve the underlying problems. You get the same kind of ‘updating’ if you respond to the thermodynamic objections to your car with a windmill on top by suggesting that you stand on the roof and blow. Also, you should note that you listed one of the weakest possible counter-arguments to your own argument, which is bad practice, rationality-wise.
The botched FAI attempts have their specific risks—euthanasia, wireheading, and so on—which don’t exist for an AI that is not explicitly friendly.
Dead wrong. You’ve clearly never been eaten by a paperclip maximizer.
And if it would foom, it would wirehead rather than add more hardware; and it would be incredibly difficult to prevent it from doing so.
Unsubstantiated assertion.
EY very strongly pattern-matches to this friend of mine, and focuses very hard on the known-unknowns aspect of the problem, about which we know very little—which can easily steer one into a very dangerous zone full of unknown unknowns: the not-quite-FAIs that euthanize us, or worse.
Ad-hominem, and the rest is just unwarranted condescension. There’s a good post to be written about a skeptical approach to AI-risk scenarios, and this is definitely not it.
Okay, then, you’re right: the manner of presentation of the AI-risk issue on Less Wrong somehow makes a software developer respond with incredibly bad and unsubstantiated objections.
Why, when a bunch of people get together, don’t they even try to evaluate the impression they make on one individual? (Except very abstractly.)
I’m a software developer too (in training, anyway). Sometimes I’m wrong about things. It’s not unusual, or the fault of the material I was reading when I made the mistake. I’m not even certain you’re wrong. What I am certain of is that your provided argument does not support, or even strongly imply, your stated thesis. If you want to change my mind, then give me something to work with.
EDIT: You’re right about one thing—Less Wrong has a huge image problem; but that’s entirely tangential to the question at issue.
What I am certain of is that your provided argument does not support, or even strongly imply, your stated thesis.
I know this. I am not making an argument here (or, actually, am trying not to). I’m stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. It is deliberately this way. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).
In that case, you might want to consider rewriting your post. Right now, the crazy messiah vibe is coming through very strongly. Either back it up and stop wasting our time, or rewrite it to assert less social dominance. If you do the latter without the former, people get cranky.
I’m mainstream, you guys are fringe, do you understand? I am informing you that you are not only not convincing, but look like complete clowns who don’t know big O from a letter of the alphabet. I know you want to do better than this. And I know some of the people here have technical knowledge.
Er… what?
Yes, this is what is meant by “assert social dominance”. The suggestion was to do less of it, though, not more.